Section: Research Program

Perception and Situation Awareness

Robust perception and decision-making in open and dynamic environments populated by human beings is an open and challenging scientific problem. Traditional perception techniques do not provide an adequate solution to this problem, mainly because such environments are uncontrolled (partially unknown and open) and impose strong constraints (in particular high dynamicity and strong uncertainty). This means that the proposed solutions have to simultaneously take into account characteristics such as real-time processing, temporary occlusions, dynamic changes, and motion prediction; these solutions also have to include explicit models for reasoning about uncertainty (data incompleteness, sensing errors, hazards of the physical world).

Sensor fusion

In the context of autonomous navigation, we investigate sensor fusion problems in which sensors and robots have limited capacities. This relates to the general study of the minimal conditions for observability.

Special attention is devoted to the fusion of inertial and monocular vision sensors. We are particularly interested in closed-form solutions, i.e., solutions able to determine the state only in terms of the measurements obtained during a short time interval. This is fundamental in robotics, since such solutions do not need initialization. For the fusion of visual and inertial measurements we have recently obtained such closed-form solutions in [41] and [44]. This work is currently supported by our ANR project VIMAD (autonomous navigation of aerial drones by fusing visual and inertial data; led by A. Martinelli, Chroma).
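
To give the flavour of such closed-form solutions, the following sketch recovers the metric scale and the initial velocity from simulated visual and inertial data over a short interval by solving a single linear least-squares problem. This is only a minimal illustration under strong assumptions (known orientation, gravity already removed from the accelerations, negligible biases, noise-free simulated data); the solutions of [41] and [44] address the full problem.

import numpy as np

# Minimal sketch: recover the metric scale lambda and the initial
# velocity v0 from monocular-vision positions (known only up to scale)
# and double-integrated accelerometer data over a short interval.
rng = np.random.default_rng(0)
dt, n = 0.01, 200                        # a 2-second interval
t = np.arange(1, n + 1) * dt

a = rng.normal(0.0, 1.0, (n, 3))         # world-frame accelerations
v0_true = np.array([0.5, -0.2, 0.1])     # true initial velocity
lam_true = 2.5                           # true metric scale

v = np.cumsum(a * dt, axis=0)            # velocity change from the IMU
p_true = np.cumsum((v0_true + v) * dt, axis=0)
p_vis = p_true / lam_true                # up-to-scale visual positions
p_imu = np.cumsum(v * dt, axis=0)        # double integral of a alone

# Linear system: lam * p_vis(t_i) - v0 * t_i = p_imu(t_i),
# unknowns [lam, v0x, v0y, v0z]; solved in closed form by least squares.
A = np.zeros((3 * n, 4))
A[:, 0] = p_vis.reshape(-1)
for k in range(3):
    A[k::3, 1 + k] = -t
sol, *_ = np.linalg.lstsq(A, p_imu.reshape(-1), rcond=None)
print("scale:", sol[0], " v0:", sol[1:])  # ~ lam_true, v0_true

Because the unknowns enter the measurements linearly, the estimate is obtained in one shot, without any initial guess, which is precisely why such solutions need no initialization.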

We are also interested in understanding the observability properties of these sensor fusion problems. In other words, for a given sensor fusion problem, we want to determine which physical quantities the sensor measurements allow us to estimate. This is a fundamental step in order to properly define the state to be estimated. To achieve this goal, we apply standard analytic tools developed in control theory together with the concept of continuous symmetry recently introduced by the eMotion team [40]. In order to take into account the presence of disturbances, we introduce general analytic tools able to derive the observability properties in the nonlinear case when some of the system inputs are unknown (and act as disturbances).
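
As a toy illustration of this rank-condition machinery (the unicycle system below is an assumption chosen for the sketch, not our visual-inertial model), the following snippet builds the observability codistribution from low-order Lie derivatives and finds rank 2 instead of 3: the missing direction is exactly a continuous symmetry, namely a rigid rotation of the whole state about the landmark, the kind of structure studied in [40].

import numpy as np
import sympy as sp

# Unicycle with state (x, y, th) and inputs v, omega, measuring only
# its squared distance to a landmark placed at the origin.
x, y, th = sp.symbols('x y th')
X = sp.Matrix([x, y, th])
f_v = sp.Matrix([sp.cos(th), sp.sin(th), 0])   # vector field of input v
f_w = sp.Matrix([0, 0, 1])                     # vector field of input omega
h = x**2 + y**2                                # output (squared range)

def lie(fun, field):
    """Lie derivative of the scalar `fun` along the vector `field`."""
    return (sp.Matrix([fun]).jacobian(X) * field)[0]

# Gradients of low-order Lie derivatives span the observability codistribution.
funs = [h, lie(h, f_v), lie(h, f_w),
        lie(lie(h, f_v), f_v), lie(lie(h, f_v), f_w)]
O = sp.Matrix([sp.Matrix([g]).jacobian(X) for g in funs])

# Evaluate at a generic state and check the rank condition numerically.
O_num = np.array(O.subs({x: 0.8, y: -0.3, th: 0.7}).tolist(), dtype=float)
print(np.linalg.matrix_rank(O_num))  # 2 < 3: one unobservable direction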

Figure 1. Illustrations: a) HSBOF model; b) Risk-RRT planning with humans; c) simulating humans and robots.

Bayesian perception

In previous work carried out in the eMotion team, we have proposed a new paradigm in robotics called “Bayesian Perception”. The foundation of this approach relies on the concept of the “Bayesian Occupancy Filter (BOF)”, initially proposed in the PhD thesis of Christophe Coué [28] and further developed in the team [36]. The basic idea is to combine a Bayesian filter with a probabilistic grid representation of both the space and the motions (see Fig. 1.a). This model allows the filtering and fusion of heterogeneous and uncertain sensor data; it takes into account the history of the sensor measurements, a probabilistic model of the sensors and of the uncertainty, and a dynamic model of the motions of the observed objects. Current and future work on this research axis addresses two complementary issues:
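
The following sketch illustrates the BOF filtering cycle on a 1-D occupancy grid, with a simple shift-based motion model and a Bayes update under an uncertain sensor model. All parameters are assumed values for illustration; the actual BOF and HSBOF models are richer, filtering 2-D grids that carry a probability distribution over velocities in each cell.

import numpy as np

n_cells, p_move = 100, 0.9            # grid size; prob. of moving one cell/step
occ = np.full(n_cells, 0.5)           # prior P(occupied) for every cell

def predict(occ, p_move):
    """Propagate occupancy with a simple constant-velocity motion model:
    an occupied cell shifts one cell to the right with probability p_move."""
    shifted = np.roll(occ, 1)
    shifted[0] = 0.5                  # unknown space enters the grid
    return p_move * shifted + (1 - p_move) * occ

def update(occ, z, p_hit=0.7, p_false=0.2):
    """Bayes update with an uncertain sensor: z[i] = 1 if cell i returned
    a detection; p_hit and p_false are assumed detection probabilities
    for occupied and empty cells."""
    l_occ = np.where(z == 1, p_hit, 1 - p_hit)       # P(z | occupied)
    l_free = np.where(z == 1, p_false, 1 - p_false)  # P(z | empty)
    return l_occ * occ / (l_occ * occ + l_free * (1 - occ))

# One filtering cycle: predict with the motion model, then fuse a measurement.
z = np.zeros(n_cells)
z[40] = 1                             # a single detection at cell 40
occ = update(predict(occ, p_move), z)
print(occ[39:42])                     # the belief peaks at the detected cell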

Situation Awareness & Bayesian Decision-making

Prediction, in particular of the evolution of the perceived actors, is an important ability for navigation in dynamic uncertain environments, since it is required for making safe on-line decisions (concept of “Bayesian Decision-making”). We have recently shown that an interesting property of the Bayesian Perception approach is to generate short-term conservative predictions (i.e., assuming that motion parameters remain stable during a small amount of time; see the sketch below) of the likely future evolution of the observed scene, even if the sensing information is temporarily incomplete or unavailable [46]. But in human-populated environments, estimating more abstract properties (e.g., object classes, affordances, agent intentions) is also crucial to understand the future evolution of the scene. Our current and future work in this research axis focuses on two complementary issues:
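
As a minimal sketch of such short-term conservative prediction (illustrative values only, not our actual prediction model), the following snippet propagates the state of an observed object under a constant-velocity assumption while letting its positional uncertainty grow with the prediction horizon.

import numpy as np

# Constant-velocity prediction of an observed object over a short horizon.
x = np.array([2.0, 1.0, 1.5, 0.0])    # state: [px, py, vx, vy]
P = np.diag([0.1, 0.1, 0.2, 0.2])     # current estimation covariance
q, dt = 0.5, 0.1                      # process noise level; time step

F = np.eye(4)
F[0, 2] = F[1, 3] = dt                # constant-velocity transition
Q = q * np.diag([dt**3 / 3, dt**3 / 3, dt, dt])  # simplified process noise

for _ in range(10):                   # predict 1 s ahead with no measurements
    x = F @ x
    P = F @ P @ F.T + Q
print(x[:2], np.sqrt(np.diag(P)[:2])) # predicted position; growing std. dev.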